# Tokenization Optimization
## Roberta Base Word Chinese Cluecorpussmall
A word-level tokenized Chinese RoBERTa Base model pre-trained on the CLUECorpusSmall corpus; word-level tokenization shortens input sequences and improves handling efficiency.
- Tags: Large Language Model, Chinese
- Publisher: uer
## Roberta Tiny Word Chinese Cluecorpussmall
A Chinese word-based RoBERTa model pre-trained on CLUECorpusSmall, featuring an 8-layer, 512-hidden architecture; compared with character-based models, it delivers better performance and faster processing.
- Tags: Large Language Model, Chinese
- Publisher: uer
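The efficiency claim for the word-based models above comes down to sequence length: word-level tokenization emits one token per word rather than one per character, so the transformer processes shorter inputs. A minimal sketch of the idea (the vocabulary and the greedy longest-match segmenter are illustrative assumptions, not these models' actual tokenizer):

```python
# Character-level tokenization: one token per Chinese character.
def char_tokenize(text):
    return list(text)

# Word-level tokenization, sketched as greedy longest-match against a
# tiny hypothetical vocabulary (real models use a learned segmenter).
VOCAB = {"自然", "语言", "处理", "模型"}

def word_tokenize(text, vocab=VOCAB, max_word_len=2):
    tokens, i = [], 0
    while i < len(text):
        # Try the longest piece first; fall back to a single character.
        for length in range(max_word_len, 0, -1):
            piece = text[i:i + length]
            if length == 1 or piece in vocab:
                tokens.append(piece)
                i += length
                break
    return tokens

sentence = "自然语言处理模型"
print(len(char_tokenize(sentence)))  # 8 tokens at character level
print(len(word_tokenize(sentence)))  # 4 tokens at word level
```

Halving the token count roughly halves the attention and feed-forward work per sentence, which is where the faster processing relative to character-based models comes from.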
## Chinese Bigbird Mini 1024
A Chinese pre-trained model based on the BigBird architecture, optimized for Chinese text processing and supporting long input sequences.
- License: Apache-2.0
- Tags: Large Language Model, Transformers, Chinese

- Publisher: Lowin
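BigBird's long-sequence support comes from replacing full self-attention with a sparse pattern: a sliding window, a handful of global tokens, and some random connections. A minimal sketch of the mask construction (the window width and global-token count are illustrative defaults; the real BigBird pattern also adds random attention links):

```python
def sparse_attention_mask(seq_len, window=3, n_global=1):
    """Build a BigBird-style attention mask: 1 = may attend, 0 = masked.

    Combines a sliding window of width `window` with `n_global` global
    tokens that attend to, and are attended by, every position.
    """
    half = window // 2
    mask = [[0] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        for j in range(seq_len):
            # Window neighbours, plus rows/columns of global tokens.
            if abs(i - j) <= half or i < n_global or j < n_global:
                mask[i][j] = 1
    return mask

for row in sparse_attention_mask(6):
    print(row)
```

With this pattern the number of attended pairs grows roughly linearly with sequence length instead of quadratically, which is what makes 1024-token inputs tractable for a mini-sized model.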